Responsible AI System
Design and Validation of a Responsible Artificial Intelligence-based System for the Referral of Diabetic Retinopathy Patients
Moya-Sánchez, E. Ulises, Sánchez-Perez, Abraham, Da Veiga, Raúl Nanclares, Zarate-Macías, Alejandro, Villareal, Edgar, Sánchez-Montes, Alejandro, Jauregui-Ulloa, Edtna, Moreno, Héctor, Cortés, Ulises
Diabetic Retinopathy (DR) is a leading cause of vision loss in working-age individuals. Early detection of DR can reduce the risk of vision loss by up to 95%, but a shortage of retinologists and challenges in timely examination complicate detection. Artificial Intelligence (AI) models using retinal fundus photographs (RFPs) offer a promising solution. However, adoption in clinical settings is hindered by low-quality data and biases that may lead AI systems to learn unintended features. To address these challenges, we developed RAIS-DR, a Responsible AI System for DR screening that incorporates ethical principles across the AI lifecycle. RAIS-DR integrates efficient convolutional models for preprocessing, quality assessment, and three specialized DR classification models. We evaluated RAIS-DR against the FDA-approved EyeArt system on a local dataset of 1,046 patients, unseen by both systems. RAIS-DR demonstrated significant improvements, with F1 scores increasing by 5-12%, accuracy by 6-19%, and specificity by 10-20%. Additionally, fairness metrics such as Disparate Impact and Equal Opportunity Difference indicated equitable performance across demographic subgroups, underscoring RAIS-DR's potential to reduce healthcare disparities. These results highlight RAIS-DR as a robust and ethically aligned solution for DR screening in clinical settings. The code and weights of RAIS-DR are available at https://gitlab.com/inteligencia-gubernamental-jalisco/jalisco-retinopathy under a Responsible AI License (RAIL).
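The two group-fairness metrics named in the abstract can be made concrete with a small sketch. This is not the RAIS-DR implementation; the function names, the toy data, and the choice of "privileged" group are illustrative assumptions. Disparate Impact is the ratio of favorable-prediction rates between the unprivileged and privileged subgroups, and Equal Opportunity Difference is the gap in their true-positive rates.

```python
# Hypothetical sketch of the two fairness metrics named in the abstract;
# not the RAIS-DR code. `privileged` marks each patient's demographic
# subgroup (True = privileged group).

def selection_rate(preds, mask):
    """Fraction of positive predictions among patients where mask is True."""
    sel = [p for p, m in zip(preds, mask) if m]
    return sum(sel) / len(sel)

def disparate_impact(preds, privileged):
    """P(pred=1 | unprivileged) / P(pred=1 | privileged).
    1.0 is perfect parity; a common rule of thumb flags values below 0.8."""
    unpriv = [not g for g in privileged]
    return selection_rate(preds, unpriv) / selection_rate(preds, privileged)

def equal_opportunity_difference(preds, labels, privileged):
    """TPR(unprivileged) - TPR(privileged), over truly-positive cases only.
    0.0 means both groups' referable patients are flagged at the same rate."""
    def tpr(group_flag):
        hits = [p for p, y, g in zip(preds, labels, privileged)
                if y == 1 and g == group_flag]
        return sum(hits) / len(hits)
    return tpr(False) - tpr(True)

# Toy example: 4 privileged and 4 unprivileged patients.
preds  = [1, 0, 1, 1,  1, 0, 0, 0]   # model's referral decisions
labels = [1, 0, 1, 0,  1, 1, 0, 0]   # ground-truth referable DR
priv   = [True] * 4 + [False] * 4

print(round(disparate_impact(preds, priv), 3))                      # 0.333
print(round(equal_opportunity_difference(preds, labels, priv), 3))  # -0.5
```

In this toy data the unprivileged group is referred at a third of the privileged group's rate and its sick patients are caught half as often, so both metrics flag a disparity that an accuracy score alone would hide.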
Leveraging Professional Ethics for Responsible AI
Artificial Intelligence (AI) is proliferating throughout society, but so too are calls for practicing Responsible AI.[4] The ACM Code of Ethics and Professional Conduct states that computing professionals should contribute to society and human well-being (General Ethical Principle 1.1), but it can be difficult for a computer scientist to judge the impacts of a particular application in all fields. AI is influencing a range of social domains, from law and medicine to journalism, government, and education. Technologists do not just need to make the technology work and scale it up; they must make it work while also taking responsibility for a host of societal, ethical, legal, and other human-centered concerns in these domains.[11] There is no shortcut to becoming an expert social scientist, ethicist, or legal scholar.
Normative Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions
Woodgate, Jessica, Ajmeri, Nirav
Responsible AI must be able to make decisions that consider human values and can be justified by human morals. Operationalising normative ethical principles inferred from philosophy supports responsible reasoning. We survey computer science literature and develop a taxonomy of 23 normative ethical principles which can be operationalised in AI. We describe how each principle has previously been operationalised, highlighting key themes that AI practitioners seeking to implement ethical principles should be aware of. We envision that this taxonomy will facilitate the development of methodologies to incorporate normative ethical principles in responsible AI systems.
Connecting the Dots in Trustworthy Artificial Intelligence: From AI Principles, Ethics, and Key Requirements to Responsible AI Systems and Regulation
Díaz-Rodríguez, Natalia, Del Ser, Javier, Coeckelbergh, Mark, de Prado, Marcos López, Herrera-Viedma, Enrique, Herrera, Francisco
Trustworthy Artificial Intelligence (AI) is based on seven technical requirements sustained over three main pillars that should be met throughout the system's entire life cycle: it should be (1) lawful, (2) ethical, and (3) robust, both from a technical and a social perspective. However, attaining truly trustworthy AI concerns a wider vision that comprises the trustworthiness of all processes and actors that are part of the system's life cycle, and considers previous aspects from different lenses. A more holistic vision contemplates four essential axes: the global principles for ethical use and development of AI-based systems, a philosophical take on AI ethics, a risk-based approach to AI regulation, and the mentioned pillars and requirements. The seven requirements (human agency and oversight; robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability) are analyzed from a triple perspective: What each requirement for trustworthy AI is, Why it is needed, and How each requirement can be implemented in practice. On the other hand, a practical approach to implement trustworthy AI systems allows defining the concept of responsibility of AI-based systems facing the law, through a given auditing process. Therefore, a responsible AI system is the resulting notion we introduce in this work, and a concept of utmost necessity that can be realized through auditing processes, subject to the challenges posed by the use of regulatory sandboxes. Our multidisciplinary vision of trustworthy AI culminates in a debate on the diverging views published lately about the future of AI. Our reflections in this matter conclude that regulation is a key for reaching a consensus among these views, and that trustworthy and responsible AI systems will be crucial for the present and future of our society.
Tailoring Requirements Engineering for Responsible AI
Maalej, Walid, Pham, Yen Dieu, Chazette, Larissa
Requirements Engineering (RE) is the discipline for identifying, analyzing, as well as ensuring the implementation and delivery of user, technical, and societal requirements. Recently reported issues concerning the acceptance of Artificial Intelligence (AI) solutions after deployment, e.g. in the medical, automotive, or scientific domains, stress the importance of RE for designing and delivering Responsible AI systems. In this paper, we argue that RE should not only be carefully conducted but also tailored for Responsible AI. We outline related challenges for research and practice.
Open-Source Toolkits To Develop Responsible AI Systems In 2023
Artificial intelligence is advancing rapidly, with recent estimates by PwC predicting that AI will contribute 13.7 trillion USD to the world economy by 2030. This growth has heightened concerns about AI's potential impact on society. One way to mitigate these concerns is to develop responsible AI systems that consider the ethical principles of beneficence, non-maleficence, autonomy, and justice. Several open-source toolkits are available to help developers build such systems, though it remains critical to understand the implications of developing and deploying AI responsibly and ethically.
Getting a Read on Responsible AI
There is great promise and potential in artificial intelligence (AI), but if such technologies are built and trained by humans, are they capable of bias? Absolutely, says William Wang, the Duncan and Suzanne Mellichamp Chair in Artificial Intelligence and Designs at UC Santa Barbara, who will give the virtual talk "What is Responsible AI," at 4 p.m. Tuesday, Jan. 25, as part of the UCSB Library's Pacific Views speaker series. "The key challenge for building AI and machine learning systems is that when such a system is trained on datasets with limited samples from history, they may gain knowledge from the protected variables (e.g., gender, race, income, etc.), and they are prone to produce biased outputs," said Wang, also director of UC Santa Barbara's Center for Responsible Machine Learning. "Sometimes these biases could lead to the 'rich getting richer' phenomenon after the AI systems are deployed," he added. "That's why in addition to accuracy, it is important to conduct research in fair and responsible AI systems, including the definition of fairness, measurement, detection and mitigation of biases in AI systems."
The State of AI Ethics Report (Volume 4)
Gupta, Abhishek, Royer, Alexandrine, Wright, Connor, Heath, Victoria, Fancy, Muriam, Ganapini, Marianna Bergamaschi, Egan, Shannon, Sweidan, Masa, Akif, Mo, Butalid, Renjie
The 4th edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in the field of AI Ethics since January 2021. This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the ever-changing developments in the field. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, with a particular focus on four key themes: Ethical AI, Fairness & Justice, Humans & Tech, and Privacy. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. Opening the report is a long-form piece by Edward Higgs (Professor of History, University of Essex) titled "AI and the Face: A Historian's View." In it, Higgs examines the unscientific history of facial analysis and how AI might be repeating some of those mistakes at scale. The report also features chapter introductions by Alexa Hagerty (Anthropologist, University of Cambridge), Marianna Ganapini (Faculty Director, Montreal AI Ethics Institute), Deborah G. Johnson (Emeritus Professor, Engineering and Society, University of Virginia), and Soraj Hongladarom (Professor of Philosophy and Director, Center for Science, Technology and Society, Chulalongkorn University in Bangkok). This report should be used not only as a point of reference and insight on the latest thinking in the field of AI Ethics, but should also be used as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.